Tag Archives: vRealize Orchestrator (vCO)

Automating vRA IaaS Blueprint creation with vRO

In some situations it can be very convenient to automatically create vRA (vCAC) IaaS blueprints. However, since the IaaS part of vRA uses an OData API, doing so is not a trivial task. OData is basically just a representation of the IaaS database. There is no logic in front of it and thus no way to tell the API: "Create a blueprint for this VM please". Previously I talked a bit about the vCAC API. In this post I'll get more practical and explain the steps involved in automating vRA IaaS blueprint creation with vRO.

Entities

All objects in the IaaS OData API are called entities. It doesn't matter if it is a blueprint, a host or a build profile; everything is an entity. So in order to create a new blueprint we have to create a new entity, which is done with the following line of code:
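The original snippet is an embedded code block in the source post. A minimal reconstruction using the vCAC plugin's vCACEntityManager scripting class (the variables vcacHost, parameters and links are placeholders explained below):

    // Create the blueprint entity in the IaaS database
    var blueprintEntity = vCACEntityManager.createModelEntity(
        vcacHost.id,                    // id of the IaaS server object in vRO
        "ManagementModelEntities.svc",  // model name, always this value
        "VirtualMachineTemplates",      // the table where blueprints live
        parameters,                     // Properties object, see "Parameters" below
        links,                          // Properties object, see "Links" below
        null);                          // headers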

The createModelEntity method needs a couple of input parameters. The first one is easy: it's the id of the vCACServer. This is the IaaS server object in vRO, not the vCACCAFE host. You can simply configure this server as an input of the workflow.

The second parameter is always "ManagementModelEntities.svc" and the third one is basically the name of the table where the entity should end up. For a blueprint entity it's always "VirtualMachineTemplates" because that's the table where the blueprints live.

Now for the tricky part: the parameters and links values.

Parameters

The parameters for the entity are basically the properties of the object. The trick is figuring out which properties to use. There are a couple of methods: you could create a simple workflow that just dumps all the properties of an existing blueprint entity to a log. I prefer using LINQPad because it also shows you the relations between different tables.

So which properties do we need for the blueprint entity and how do we put them into the parameters variable? The script below shows how to do this.
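The original script is embedded in the source post. A sketch of the idea; the exact set of property names should be taken from a dump of an existing blueprint entity in your own environment:

    // Build the parameters for the new blueprint entity.
    // Property names below are examples observed on existing blueprints;
    // verify them against your own entity dump.
    var parameters = new Properties();
    parameters.put("VirtualMachineTemplateName", blueprintName);
    parameters.put("TenantID", "vsphere.local");
    parameters.put("Description", description);
    // ...add the remaining columns you found on the existing blueprint entity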

Obviously you might want to change the TenantID or get it from a variable.

Links

An entity object is basically an entry in a database table. This table links to other tables, so when creating a new entity you need to define which elements in other tables the new entity links to.

For blueprint entities there are 5 links to be set:

  • InterfaceType (vSphere in this case)
  • HostReservationPolicy (The reservation policy to deploy to)
  • ProvisioningGroup (a.k.a. Business Group)
  • WorkflowInfo (This is the kind of deployment you want. So for a BP that clones a template you need the Clone Workflow WorkflowInfo entity… still with me?)
  • GlobalProfiles (a.k.a. Build Profiles)

This is how you put those links into a links object:
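A sketch of the links object, matching the five links listed above (how to find the entity variables is covered in the next section):

    // Each value is an array of vCAC entities
    var links = new Properties();
    links.put("InterfaceType", [interfaceTypeEntity]);
    links.put("HostReservationPolicy", [reservationPolicyEntity]);
    links.put("ProvisioningGroup", [provisioningGroupEntity]);
    links.put("WorkflowInfo", [workflowInfoEntity]);
    links.put("GlobalProfiles", buildProfiles); // already an array, so no [ ]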

As you can see, each attribute in the object is an array containing vCAC entities. Most arrays have only one value; only GlobalProfiles has multiple values, if you have more than one build profile selected. In the code above the buildProfiles variable is defined somewhere else and is already an array, so I left out the [ ].

Finding links entities

I guess you're now wondering how to get the entity objects for the links. You need to use the vCACEntityManager to find these entities. Here is an example of how to find the WorkflowInfo entity for the clone workflow:
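A sketch using readModelEntitiesByCustomFilter; the entity set name "WorkflowInfos" and the filter {Name: "CloneWorkflow"} are assumptions based on how the IaaS database names things, so verify them in your environment:

    var model = "ManagementModelEntities.svc";
    var found = vCACEntityManager.readModelEntitiesByCustomFilter(
        vcacHost.id, model, "WorkflowInfos", {Name: "CloneWorkflow"}, null);
    var workflowInfoEntity = found[0]; // the Clone Workflow WorkflowInfo entity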

You can find the reservation policy by name in a very similar way:
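Again a sketch; the entity set and filter property names are assumptions to verify:

    var found = vCACEntityManager.readModelEntitiesByCustomFilter(
        vcacHost.id, model, "HostReservationPolicies",
        {HostReservationPolicyName: reservationPolicyName}, null);
    var reservationPolicyEntity = found[0];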

If you have an array with build profile names as an input you can find all the corresponding entities with this piece of code:
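A sketch, assuming an input array buildProfileNames and the same naming caveats as above:

    var buildProfiles = [];
    for each (var profileName in buildProfileNames) {
        var found = vCACEntityManager.readModelEntitiesByCustomFilter(
            vcacHost.id, model, "GlobalProfiles",
            {GlobalProfileName: profileName}, null);
        if (found.length > 0) {
            buildProfiles.push(found[0]);
        } else {
            throw "Build profile not found: " + profileName;
        }
    }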

If you want more of this in a workflow that’s ready to run, see the link at the end of this post.

Properties

So now we are able to create an entity with the right parameters and links. After that's successfully done there is one more thing left to do: configuring the custom properties on the blueprint.

There are a couple of required properties, which start with a double underscore. Here is the list I used:

  • __buildprofile_order (the order in which the build profiles are applied)
  • __clonefromid (ID of the VirtualMachineTemplate entity we created)
  • __clonefrom (name of the VM template entity we created)
  • __clonespec (name of the customization spec in vCenter)
  • __displayLocationToUser (false)
  • __menusecurity_snapshotmanagement (false)
  • VirtualMachine.DiskN.IsClone (true for each disk)
  • VirtualMachine.DiskN.Size (the size of each disk)

You can set these properties with the “Update Property to Blueprint” workflow that comes with the vCAC plugin.

I used this script to generate the build profile order:
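The original script is embedded in the post; the idea, assuming the property value is a comma-separated list of build profile entity IDs:

    // Join the IDs of the selected build profile entities
    var buildProfileOrder = buildProfiles.map(function (profile) {
        return profile.getProperties().get("ID");
    }).join(",");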

Just give me the workflow

Too much information? I uploaded the example workflows to FlowGrab. Find the download here.

A word of warning: the workflows might assume you're using the vsphere.local tenant in vCAC. That should be easy to fix; if I have some time left in the near future I might fix it myself. If you're using another tenant you should be able to make this work for you anyway.

webEye – The webhook receiver

When building out the demo environment for our NLVMUG UserCon presentation I came across a problem: I wanted to notify my private cloud whenever a new Docker image was built on Docker Hub. This proved impossible with the existing VMware software, so I created my own solution. And here it is: webEye, the webhook receiver. It simply forwards all received webhooks to an AMQP bus after checking that each is a valid webhook message. You can pick up the messages with your favourite orchestration tool and act on them.


Every hook needs an eye to hook into. That’s why my little app is called webEye 🙂


webEye is written in JavaScript and runs on Node.js. It is designed to run in a Docker container. However, it has already evolved beyond something that was originally intended to just receive Docker Hub webhooks: it currently also has support for my "Magic Button" and even for vRealize Operations. Other webhook senders might follow.

Getting started

As I said, webEye is designed to run in a Docker container, so this "getting started" only covers how to start the app in a Docker environment.

  • All received hooks are forwarded to an AMQP bus, so let's start an AMQP server first: docker run --name rabbitmq -p 5672:5672 -p 15672:15672 dockerfile/rabbitmq
  • Now start webEye: docker run -p 80:80 -p 443:443 -e "DHKEY=12345" -e "MBKEY=12345" --name webEye --link rabbitmq:rabbit -t vchrisr/webeye
  • The DHKEY in the line above sets the API key that you need to send with each request, which adds a bit of security. Make sure to put in a random string instead of "12345" (tip: random.org).
  • Now make sure port 80 on your webEye server is mapped to a public ip address
  • Now open the webEye page in a browser to get it running. This first visit triggers Phusion Passenger in the container to start the node.js app, which in turn creates a persistent exchange on the rabbitMQ server.
  • Create a webhook on your docker hub repository to http://{your public ip}:{public port}/dockerhub?apikey=12345
  • Connect your consumer to the rabbitMQ server (a minimal Node.js consumer sketch follows this list)
  • Create a new Q to receive your messages
  • Create a binding which routes messages with routing key webeye.docker.hub to your Q
  • Create a subscription for the Q you created
  • If you’re using vRO you can now create a policy which triggers a workflow when a message appears in the subscription.
  • Create a workflow that does whatever you want when a docker hub hook is received.
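For consumers other than vRO, a minimal Node.js consumer could look like the sketch below. It uses the amqplib package; the exchange name "webEye" is an assumption, so check which exchange webEye actually creates in your setup:

    var amqp = require('amqplib/callback_api');

    amqp.connect('amqp://localhost', function (err, conn) {
      if (err) throw err;
      conn.createChannel(function (err, ch) {
        if (err) throw err;
        // Queue and binding as described in the steps above
        ch.assertQueue('dockerhub-events', { durable: true });
        ch.bindQueue('dockerhub-events', 'webEye', 'webeye.docker.hub');
        ch.consume('dockerhub-events', function (msg) {
          console.log('Received hook: %s', msg.content.toString());
          ch.ack(msg); // acknowledge so the message leaves the queue
        });
      });
    });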

Testing webEye

If you were able to make webEye available on the public internet and you've configured a webhook on your Docker Hub repo, you can simply click "test" on the webhook configuration page.

To test offline I usually use the Firefox RESTClient plugin:

  • Select "POST" as the method.
  • Enter this url: http://<ip of webEye machine>:<port>/dockerhub?apikey=<apikey>
  • Add this header: Content-Type: application/json
  • For the body you need some actual content. webEye checks for the presence of some specific fields in the json to make sure it's a Docker Hub webhook. I usually use the json from the Docker Hub documentation:
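The payload from the post is a screenshot; here is a trimmed example of the shape the Docker Hub documentation describes (all values are placeholders):

    {
      "callback_url": "https://registry.hub.docker.com/u/example/testhook/hook/abc123/",
      "push_data": {
        "pushed_at": 1417566161,
        "pusher": "exampleuser",
        "tag": "latest"
      },
      "repository": {
        "repo_name": "example/testhook",
        "repo_url": "https://registry.hub.docker.com/u/example/testhook/",
        "name": "testhook",
        "namespace": "example"
      }
    }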


Orchestrator JavaScript speed test: .map()

A while ago I wrote a blog post in which I showed the performance difference between the array prototype function .indexOf() and a for each loop. Now it's time for the second part of the series: the Orchestrator JavaScript speed test for .map().

Test setup

The test setup is identical to the setup I described in the previous post: same machine, same vCO appliance. I did change the script that generates the test array slightly: instead of a string I now store an object in each array element. Here is the script:
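The script is embedded in the original post; a reconstruction based on the element layout shown below:

    // Generate 100,000 elements, each holding a small object
    var testArray = [];
    for (var i = 0; i < 100000; i++) {
        testArray.push({ number: i.toString(), value: "value=" + i });
    }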

Both tests are done in the same workflow so they run as close together as possible, under the same circumstances.

[Image: the map vs. for each test workflow]

Mapping an Array

Mapping an array into another array means running some action on every element of the array and returning the value of that action into a new array. What I will do in this test is take one attribute of the object that is stored in each array element and create a new array that consists of only that one attribute. This makes it easier to search for the right index number later on using .indexOf().

  • So here is the content of one array element: { number: "1", value: "value=1" }
  • And what we want as an end result is an array where each element just contains “value=1” for example.

There are basically two ways to do this: you can either use the prototype function .map() or create your own loop. Let's try the prototype function first. .map() takes a function as an argument; whatever the function returns is stored in the corresponding element of the target array.
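A sketch of the .map() test as I understand it from the post:

    var start = new Date();
    var mapped = testArray.map(function (element) {
        return element.value; // keep only the 'value' attribute
    });
    var end = new Date();
    System.log(".map() took " + (end - start) + " ms");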

Below is the result of this test:

So the map action took 94 milliseconds. But over a couple of test runs I got different results, ranging from 86 to 119 ms. Now let's try a for each loop to see how long that takes:
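The equivalent for each version, again as a sketch:

    var start = new Date();
    var mapped = [];
    for each (var element in testArray) {
        mapped.push(element.value);
    }
    var end = new Date();
    System.log("for each took " + (end - start) + " ms");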

And here are the results:

So this particular run took 106 milliseconds. But again, as with .map(), I don't get consistent results; I've seen values everywhere between 82 and 139 ms.

I ran both tests sequentially in the same workflow, and even the difference between the two methods is not consistent: sometimes .map() is faster, sometimes the loop is faster.

Conclusion

I cannot definitively say which method is faster. The only thing I can say for sure is that they are about the same speed. But if you ask me which method I prefer, the answer is: .map()! Why? Because if I read somebody else's code and I see .map() being used, I know something is being mapped. But if I see a for each loop I have to go through the whole loop to understand what's going on. In the example above the loop might be simple, but in real life it can get complicated pretty quickly.

 

Automating vRA (vCAC) using vRO – Split Brain

Recently I have done some work on automating vRA (vCAC) using vRO (vCO). This meant I had to dive into the vCAC APIs. The bad news is that this felt like diving into a pool of dark muddy water. The good news is that I’m still alive, my headache is gone and I’ll try to capture some of the things I learned on this blog.

Split brain

In this post I’ll start out with an introduction to the vCAC APIs. Yes, plural. Not just one API.

vCAC ahem… vRA is actually not just one product; it's two products which are loosely coupled and sold as one. The first product is the vRA appliance, also known as CAFE. This is a new product that was introduced with vCAC version 6.0. It is developed in Java (SpringSource), runs on Linux, uses Postgres as a data persistence layer, seems to use a microservices architecture, supports multi-tenancy and provides a REST API.

But there is also the old product that was originally developed at Credit Suisse, spun off as DynamicOps and then acquired by VMware. It was sold as vCAC 5.x, is developed in .NET, uses an MS SQL back-end, runs .NET workflows, has no notion of multi-tenancy and provides an OData API. This part is usually called the IaaS part.

The two products are also reflected in two separate vCO ahem… vRO plugins. Although you download and install just one package, there are really two plugins installed. One is called VCAC and has the description "vCloud Automation Center Infrastructure Administration plug-in for vCenter Orchestrator"; the other is called CAFE and is described as "vCloud Automation Center plug-in for vCenter Orchestrator".

Confusing. Right? So let’s clear things up:

CAFE is the virtual appliance. All new features are developed in CAFE. So anything that was added since 6.0 runs on the appliance and can be used from the REST API. On top of that some functionality was moved to the appliance. Functionality running in CAFE in version 6.1 includes:

  • Business Groups and Tenants
  • Advanced Service Designer
  • The Catalog
  • Resource Actions
  • Approval policies
  • Notifications

So if you want to automate anything regarding any of these features you’ll need the CAFE plugin which talks to the REST API running on the virtual appliance.

IaaS is the name of everything that's not on the appliance. It is the reason you need a Windows server to run vRA, not just the appliance. This Windows server (or multiple servers) runs the old DynamicOps software with some modifications. Features provided by this part of vRA include:

  • Virtual Machine Blueprints
  • Machine Prefixes
  • Provisioning Groups (Maps to Business Groups in CAFE, GUI only knows Business Groups in the current version)
  • Reservations
  • VirtualMachines (vCAC VM objects which map to vSphere/vCloud VMs or even physical machines)

If you want to automate any of the above you'll need to use the vCAC plugin or the OData API. If you're not familiar with OData APIs there is something you should know: it's not an actual API. It's just a representation of the database. There is no application logic behind it, just database constraints. This means that creating new things (called entities) is rather difficult: you have to figure out all the links between the different database tables yourself. I'll try to dive deeper into this in another blog post.

There is another peculiarity I want to point out: there is no multi-tenancy in the IaaS part. This means that a lot of items from the IaaS part (for example: machine prefixes) are shown to all tenants!

Touchpoints

The fact that vRA basically has a split brain creates some challenges when automating things in vRA. For example: you'll have to create a blueprint in the IaaS part, but when you want to publish it you have to create a catalog item in the CAFE part of the product. Which brings me to the last part of this post.

As I said before, the two products are loosely coupled. The actual touchpoints are not documented, or at least I couldn't find anything. But after spending a lot of hours trying to find out how to automate the publishing of blueprints, I found these touchpoints between both APIs:

  • The Business Group ID in CAFE is identical to the Provisioning Group ID in IaaS. If you create a Business Group in the REST API then vRA also creates the ProvisioningGroup in IaaS for you.
  • The catalog actually consists of three catalogs (more on this later). One of the catalogs is the provider catalog. Each provider manages its own provider catalog, and IaaS is one of the providers. Somehow CAFE knows where to find certain provider IDs; I'm not sure where to find or set that mapping.
  • Every catalog item has a providerBinding attribute, which contains a bindingId. This binding ID is the blueprint ID (virtualMachineTemplateID) from the IaaS part. This is how vRA figures out which blueprint to deploy when you request a catalog item.
  • A resource operation has a bindingId which maps the CAFE action to the IaaS action (like powering on a VM, for example)

Orchestrator Javascript speed test: IndexOf()

As you might know, JavaScript is the scripting language used in vRealize Orchestrator. So while I'm not a web developer, I use a lot of JavaScript. When handling arrays in my scripts I tend to use a lot of prototype functions like .map(), .forEach(), .indexOf() and a couple of others. But when I go through the library workflows I see a lot of for each loops with some ifs and a break instead of the prototype functions being used. I have some opinions on this which I will share later. For now I was just wondering which method is faster: using the prototype functions or using your own loops. To answer this question I decided to do some speed tests. This is the first post about these tests: the Orchestrator JavaScript speed test for indexOf().

Setting up the test

To be able to measure a difference in performance I needed a significantly large array. I settled on an array with 100,000 elements as this seemed to take enough time to loop through to see an actual performance difference between the methods. I executed the tests on a vCO 5.5.2 virtual appliance running on my laptop, so if you run the appliance on a faster machine you might need a bigger array.

I used this script to create the array:
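The script is embedded in the original post; a reconstruction (each element holds the string representation of its own index, as described below):

    var testArray = [];
    for (var i = 0; i < 100000; i++) {
        testArray.push(i.toString());
    }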

The actual speed tests are in the same workflow as the array generation script. This way both tests run as close together as possible to ensure the same circumstances for both tests.

[Image: the indexOf test workflow]

Finding the index of a value

Imagine you have an array and you want to figure out in which element a certain value is stored. There are two ways to do this. The easiest is using the .indexOf() prototype method; alternatively you could use a for each (..) loop. To find out which method is the fastest I generated an array with 100k elements, where the value of each element is the string representation of its index number. On the array I executed the code below:
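A sketch of the .indexOf() test:

    var start = new Date();
    var index = testArray.indexOf("99999"); // the very last element
    var end = new Date();
    System.log(".indexOf() returned " + index + " in " + (end - start) + " ms");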

This piece of code searches for the value "99999" in the array elements. We already know that is the very last element of the array, so this measures how long the function takes to loop through the whole array while still validating that it actually works correctly.

Below is the result of this script.

So the total time elapsed for the indexOf() method is 52 milliseconds.

Let’s compare this to a for each loop.
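A sketch of the for each alternative: walk the array, track a counter and break once the value is found:

    var start = new Date();
    var index = -1;
    var counter = 0;
    for each (var element in testArray) {
        if (element == "99999") {
            index = counter;
            break;
        }
        counter++;
    }
    var end = new Date();
    System.log("for each returned " + index + " in " + (end - start) + " ms");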

And here are the results:

So this run took 166 ms, which is more than 3 times slower than the .indexOf() prototype method.

Conclusion

Not only did I have to write more code to achieve the same result, the code also takes more than 3 times as long to execute. Obviously, if you hit the target earlier in the array or use a smaller array the difference will be smaller. Still, it doesn't make sense to write more code that is slower, harder to understand and not maintained by the software vendor.

So please: use the .indexOf() array prototype method when searching for the index of a specific value.

 

vRealize Orchestrator 6.0 New Features

With the release of vRealize Automation 6.2 VMware also released vRealize Orchestrator 6.0. In this post I’ll explain the new features.

You'll find vRO 6.0 on the vRA appliance. There is no stand-alone virtual appliance or installable version for vRO 6.0 at this point in time; sources tell me this will be released with vRO version 6.0.1. vRA 6.2 is built to work with vRO 5.5.2, so if you want to use an external vRO server that's the version you'll be using. Unfortunately that means you'll miss out on these new features of vRO 6.0:

Switch

If you're familiar with JavaScript or any other scripting language you have probably used the switch statement before. It selects a code block based on the value of a variable. Orchestrator already supported this inside scripts, but now there is a switch element you can drag into a workflow. This way you can fork your workflow into different flows based on the value of a variable.
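For comparison, this is what the in-script equivalent looks like:

    switch (someVariable) {
        case "development":
            System.log("Taking the development path");
            break;
        case "production":
            System.log("Taking the production path");
            break;
        default:
            System.log("No rule matched");
    }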

This is what it looks like in a workflow schema:

 

[Image: the switch element in a workflow schema]
The picture below shows the configuration of the Switch element:

[Image: the switch element configuration dialog]
You can add rules to the switch element by clicking the green plus icon. For each rule you select a variable (in this case "someVariable"), the matching operator (Equals, Contains, Match and Defined) and, for some operators, the value to match against.

These switch rules work like firewall rules: only the first match is used. That's why you can move the rules up and down to change their order.

Default error handler

Another new element is called the "Default error handler". When you drag it into the schema it looks like this:

[Image: the default error handler element]
It is not actually connected to anything in your schema. It will be executed when an unhandled exception occurs in the workflow. Added to the workflow from the previous example it looks like this:

[Image: the workflow with a default error handler added]

As you can see in the schema, the default error handler allows you to run certain actions whenever an error occurs in the workflow that is not handled by any other error handler. So this gets rid of all the red lines pointing to one error handler. Neat!

VMworld and the future of Orchestrator

Last week I attended VMworld Europe in Barcelona. I had a great time eating tapas, drinking Rioja and learning something new in between. I already wrote about elasticity achieved using Project Fargo and Docker on my company blog. Since this blog is more automation focused I wanted to highlight some automation news. Or actually, it is more about the future of Orchestrator.

The first thing that stood out to me was the lack of vCenter Orchestrator uuhh vRealize Orchestrator break-out sessions. I think there were two or three sessions explicitly about Orchestrator. A couple of others went a little bit into Orchestrator but were focused on vRealize Automation (vCAC). Last year there were quite a few sessions about Orchestrator, telling us it was the best kept secret or the best VMware product never released and that we should really go and use this awesome tool. And of course they were right to say so. Seeing where VMware is going with Orchestrator, I was really surprised they didn't give it more attention during VMworld.

Which brings me to my second point. It is clear by now that Orchestrator will be used as the back-end for vRealize Automation. We can already see this in the current versions: the integration with NSX is completely implemented using Orchestrator. vCAC ugh… vRA has no interaction with NSX whatsoever; everything is handled via Orchestrator.

The same goes for what VMware calls Anything as a Service, which is delivered using the Advanced Service Designer. Yeah, that's a lot of buzzwords in one sentence. In reality it is just a forms designer which you can use to design user front-ends for Orchestrator workflows. The objects created by the workflow can then be managed by vRealize Automation.

I already see that the adoption of Orchestrator is mainly driven by the use of vCAC. But there is more to come. VMware said in one of the Orchestrator sessions that Orchestrator will be used as a DEM replacement for vRealize Automation (but any information given in such a presentation may change at any time). For those who aren't familiar with vCAC/vRA: the DEM is the Distributed Execution Manager. It is basically the component which does the actual work in a vCAC deployment. Currently it is 100% .NET code and runs Microsoft Windows Workflow Foundation workflows. So it makes total sense to replace that workflow engine with VMware's own workflow engine. The result will be that some day we can get rid of the Windows components in vCAC and end up with just a virtual appliance which is easy to deploy and configure. That day will be a good day.

To be able to use Orchestrator on the scale that vRA requires, there will be some changes to the product in the future: for example, better permission management, multi-geographical deployment models, integration with DevOps solutions and a lot more.

So although Orchestrator didn’t get a lot of attention during VMworld it seems it is going to play a crucial role in VMware’s automation strategy. Nice 🙂

vCO exception handling 101

I come across quite a few workflows that look like the screenshot below. Since I don't see the point in doing it like this I thought I'd do a quick vCO exception handling 101.

[Image: workflow with every exception routed to the error exit]

In the screenshot above you'll notice the red lines pointing to the red exclamation mark. The red lines are followed when an unhandled exception occurs in the action, workflow or script. But just pointing the exception to an exception exit of the workflow makes no sense at all (until somebody convinces me otherwise…). It just makes the workflow look cluttered, in my opinion. Believe it or not, the workflow below handles any exception in exactly the same way as the workflow above.

[Image: the same workflow without explicit exception bindings]

See? That's a lot less cluttered. And the "corner" in the workflow isn't even necessary; it could have been one straight line of workflow items. If there is an exception, the workflow will fail at the point of the error, which makes troubleshooting easy.

So when do you use the red lines? Whenever you want to handle the error in some way. So don't point a red line at a red exclamation mark but at a part of the workflow that handles the error. The example below shows a retry loop. I use this quite often for workflows that connect to other systems. You know, connections sometimes fail for whatever reason, so you'd better retry a few times and sleep a bit in between. If you run out of retries, then you throw the exception.

[Image: retry loop with a sleep between attempts]

By the way: make sure you put the sleep after the decision, not in front of it. Why wait, only to then discover you are out of retries anyway?

That’s it for now. More on exceptions later.

vCO LockingSystem

Whenever you are using external resources in vCO you might run into a race condition. This can happen when a workflow using an external resource is running multiple times simultaneously. To avoid data consistency problems you can use the vCO LockingSystem.

Why lock?

Imagine you have a vCO workflow which reads a counter from a web API, then changes the data, let's say adding 1 to the counter, and writes it back to the API. What happens if you start the workflow multiple times simultaneously? You could run into a situation where one workflow reads the data and modifies it but has not yet written it to the API, then the second workflow reads the not yet updated data and modifies it. Then the JavaScript engine executes the API call in the first workflow and then the API call in the second workflow. Now both workflows are using the same counter value while they were supposed to get a unique value from the API.

A real life example of this is retrieving a hostname sequence number from a vCAC hostname prefix. This process involves reading the current number from the vCAC API, increasing the number and then sending the new number back to the API. By the way: why the API itself does not handle increasing the number remains a mystery to me.

To prevent the above scenario from happening, you want to lock the resource before reading and updating it, so any other action using the resource has to wait until you are done.

Using the LockingSystem

Locking in vCO is done using the LockingSystem scripting object. To acquire a lock you use:
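    var lockAcquired = LockingSystem.lock(lockId, owner); // true if acquired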

This method tries to acquire the lock once and then returns a boolean which tells you whether the lock was acquired or not. If you want the workflow to wait until the lock is acquired you can use:
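    LockingSystem.lockAndWait(lockId, owner); // blocks until the lock is acquired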

The parameters lockId and owner are strings. The lockId identifies the object you are trying to lock, so just use a name that makes sense. The owner can be anything; I usually use the name of the workflow which is acquiring the lock. Below is an example of a scriptable task which acquires a lock. Note that this script will wait until the lock is acquired.
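The original is a screenshot; a minimal sketch with example values for lockId and owner:

    var lockId = "vCACHostnamePrefix";     // identifies the shared resource
    var owner = "updateHostnamePrefixWF";  // identifies the lock holder

    LockingSystem.lockAndWait(lockId, owner);
    System.log("Lock '" + lockId + "' acquired by '" + owner + "'");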


After the lock is acquired you can get, modify and write data to the external resource. After that you release the lock and start using the data. Below is a screenshot of a workflow that uses locking.

[Image: locking example workflow]

Remember to release the lock in case something goes wrong: if you don't handle exceptions for the tasks between acquiring the lock and releasing it, you can end up with a lock that never gets released. If you do end up in that situation you can run a workflow which contains a single script with this line:
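    LockingSystem.unlockAll(); // releases every lock, use with care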

 

How to Automate: vCAC

This is the third blog in my "How to automate" series. In the first posts I discussed PowerCLI and vCO. In this blog I will talk about vCAC.


vCAC = VMware vCloud Automation Center. It was originally developed by Credit Suisse, then spun off into a company called DynamicOps, before it was acquired by VMware. It is software that takes care of the business side of automation. It presents a portal on which users or admins can request machines and services. The logic inside vCAC determines who can request which services and where machines are deployed. After a VM or service is requested, it manages the lifecycle of the object. I use the word "object" on purpose here because it can be a virtual machine as well as a physical server. And since version 6 it can even be any kind of service instead of an actual server.

The Upsides

The big upside of vCAC is that it is a very versatile tool. Especially if you have the cloud developer license you can make the software do virtually anything, and even without it you are able to do a whole bunch of stuff. This includes deploying machines to your internal cloud as well as public clouds like Amazon or vCloud powered clouds. You could also provision storage or networks if you need to. So the software was not only developed in Switzerland, it's also versatile like a Swiss army knife.

Another big plus in my opinion is how well it integrates with vCenter Orchestrator. This is one of the reasons vCAC can do almost anything, even without the developer license. If you can do it with vCO, you can integrate it with vCAC.

vCAC delivers an end user portal which doesn’t require you to do any web development. This makes building a portal very easy and leaves more time for you to focus on the automation that happens behind the portal.

vCAC also makes it possible to import existing virtual machines. This is something that is very hard with vCloud Director. This makes vCAC a much better fit than vCD for companies running their own private datacenter (or should I say “cloud”?). For public cloud providers vCD is still the way to go in my opinion although using vCAC is not impossible.

The Downsides

The fact that vCAC is so versatile is also a downside. vCAC needs a lot of configuration before it does what you want it to do, and that's before you start digging into custom workflows. There are 10 workflows which you can customize out of the box; if you want more you need the developer license. On top of that you need .NET knowledge to actually develop custom workflows, and in-depth .NET knowledge is not something you find in a regular vCloud admin.

But there is a way around this. That way is called vCO. You can call vCO workflows from the default vCAC workflows. This will solve most of your integration problems and I really like this approach.

Speaking about vCO integration, that is actually my next downside. Don't get me wrong, I really like the vCO integration. It just leaves me wondering why I need a bunch of virtual machines, two database servers, an additional SSO and a lot of complexity to get a portal which basically calls vCO workflows and does some lifecycle management. It just doesn't feel right. OK, vCAC can do a bit more than that: deploying physical machines, for example, or creating datastores on NetApp storage. But most companies who will be using vCAC will probably not use these features, so they are stuck with an overcomplicated vCAC setup which has features that are already covered by vCO.

And that brings us to my last downside: the vCAC infra itself can get rather complex if you want to set it up in a redundant and scalable way. To VMware: please simplify this. I like the VA approach that was introduced with version 6, but I don't like setting up 6 additional Windows machines and 5 load balancers.

The Right Tool for The Job?

When should you use vCAC? In my opinion you should use it when you regularly deploy new virtual machines or other services and you want to automate that process. Using vCAC forces you to automate every single step in the deployment process. This includes creating change requests, registering IP addresses, creating the VM, assigning a network and so on. This makes it possible for application administrators and developers to request machines without any intervention from a cloud administrator. And it doesn't stop there: you can also automate the decommissioning process, limit the amount of resources someone can consume or allow them to deploy machines on Amazon instead of your internal cloud.

But remember: vCAC always brings a friend to the party: vCenter Orchestrator. But in my opinion that only makes the party better.