Tag Archives: ITQ

About Cloud Foundry Service Brokers

Cloud Foundry offers consumers of the platform all kinds of backing services. Think of services like MySQL, Redis and RabbitMQ. Those services are offered to consumers through the Cloud Foundry marketplace.

To be able to create instances of the services in the marketplace and then bind them to an application, Cloud Foundry uses Service Brokers. A Service Broker implements the Cloud Foundry Open Service Broker API and takes care of provisioning service instances. It also provides the credentials for a service so an application can connect to the created service instance. The CF Service Broker API is a REST API specification, and you can implement it any way you like. Most service brokers seem to be written in either Golang or Ruby, but it doesn't really matter in which language you implement the API. It doesn't matter where or how you run it either: as long as the broker is reachable by the Cloud Controller, Cloud Foundry will be able to consume it.
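To give you an idea of how small that API surface is, these are the core resources a v2 broker implements. The curl call at the end is just an illustration; the hostname and credentials are placeholders:

    GET    /v2/catalog                                                        # advertise services and plans
    PUT    /v2/service_instances/:instance_id                                 # provision a service instance
    DELETE /v2/service_instances/:instance_id                                 # deprovision it
    PUT    /v2/service_instances/:instance_id/service_bindings/:binding_id    # bind
    DELETE /v2/service_instances/:instance_id/service_bindings/:binding_id    # unbind

    curl -u broker-user:broker-pass -H "X-Broker-API-Version: 2.12" \
         http://mybroker.example.com/v2/catalog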

In conversations with customers I noticed some misconceptions around service brokers, so in this post I want to shed some light on what a service broker is and what it's not.

Let me start out by listing what a Service Broker is NOT:

  • A Service Broker is not a reverse proxy of some kind
  • A Service Broker is not a connector
  • A Service Broker is not a service in and of itself

So what does a service broker do? Let's walk through how a Cloud Foundry platform user would consume a MySQL database and map that to service broker operations:

  • User lists the content of the marketplace: cf marketplace
    • Cloud Foundry will list all the services that are offered by registered service brokers
  • User creates a MySQL service instance: cf create-service mysql 100mb mydatabase
    • This command tells Cloud Foundry the user wants to consume the 100mb plan of the mysql service. The service instance will be referenced as "mydatabase" within Cloud Foundry; this won't be the actual database name.
    • Cloud Foundry will call the "provision" API resource on the service broker that offers the mysql service
    • The service broker will now create a new database instance for the user and respond to Cloud Foundry with an HTTP 201 status
    • Cloud Foundry will save a reference to the service instance
  • Now the user wants to consume the database. He can do so by binding the created service to his application: cf bind-service myapplication mydatabase
    • Cloud Foundry will now call the bind resource of the MySQL service broker API.
    • The broker will create a user for the MySQL database and send a response to Cloud Foundry containing the connection details (URI, username, password) for the database server (not for the broker, but for the DB server itself).
    • Cloud Foundry takes the response and populates the VCAP_SERVICES environment variable for the application. This environment variable contains a JSON string with the information of all the services bound to the app (see the example after this list).
    • The app itself is responsible for parsing the JSON, getting the connection details and connecting to the database. From this point on the broker is no longer in the loop.
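To make that concrete, here is roughly what VCAP_SERVICES could look like after the bind above. The exact credential fields differ per broker, and all values here are made up:

    {
      "mysql": [
        {
          "name": "mydatabase",
          "label": "mysql",
          "plan": "100mb",
          "credentials": {
            "uri": "mysql://b_user:s3cret@10.0.0.20:3306/db_xyz",
            "hostname": "10.0.0.20",
            "port": 3306,
            "name": "db_xyz",
            "username": "b_user",
            "password": "s3cret"
          }
        }
      ]
    }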

In summary: the broker presents services to Cloud Foundry; CF can request service instances from the broker and request connection details for those instances. After that point the broker is out of the loop. It brokered the connection, and now the application is directly connected to the service.

When a user no longer needs the service, he can issue the cf unbind-service command. This removes the information from VCAP_SERVICES and tells the broker to initiate the unbind task. What exactly happens then depends on the broker, but in the case of MySQL it will delete the user it created during the bind operation. After the unbind you can also issue a cf delete-service command. This tells the broker to get rid of the service; in the case of MySQL it will delete the whole database.
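So the whole lifecycle, from a platform user's point of view, comes down to just four cf commands (using the names from the example above):

    cf create-service mysql 100mb mydatabase      # provision
    cf bind-service myapplication mydatabase      # bind: credentials end up in VCAP_SERVICES
    cf unbind-service myapplication mydatabase    # unbind: credentials are revoked
    cf delete-service mydatabase                  # deprovision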

In another post I will go into more detail on how to build your own broker.

Beyond automated deployment

I have been involved in quite a lot of automation projects over the last five years, all of them centered around VMware vRealize Automation and vRealize Orchestrator. During these projects customers throw all kinds of challenges at me, most of which I can solve. Over the years, however, I found two challenges that go beyond automated deployment and that I can't really solve using vRA/vRO:

  1. If you update a vSphere template, how do you make sure all machines deployed from that template are also updated?
  2. If you change a blueprint, how do you make sure those changes are also made to existing deployments from that blueprint?

The answer to both really is: you can't. Not if you're using vRA/vRO. Don't get me wrong, I'm not trying to bash these products here. It's just a result of how these products are designed and how they work.

In my opinion both problems boil down to the fact that in vRA blueprints you define the initial state of a deployment, not the desired state. So if you deploy a blueprint you get whatever was specified in that blueprint, which is fine initially. But if you change the blueprint or update the template, nothing will change on the existing deployments. The other way around is true as well: if you change or damage your deployment, vRA won't come in and fix it for you.

Now this seems obvious and not a big problem. After all, getting deployment times down from weeks to minutes using automation tools is a pretty good improvement in its own right. But if you think about it for a minute you'll realize that once you have automated deployment, you need to spend the rest of your days automating day 2 operations. After all, the tool isn't doing it for you.

For example, you'll have to introduce a tool which manages patches and updates on existing deployments. You also need to figure out a way to keep your template up-to-date, preferably automated. And if somebody breaks his deployment, you need to spend time fixing it.

Now, if you've been following my blog recently you probably already guessed the solution to this problem: BOSH :). Here are four reasons why BOSH makes your life as a platform operator easier:

  1. In BOSH a template is called a stemcell, and stemcells are versioned. You don't have to make your own; up-to-date versions of CentOS and Ubuntu stemcells are available online at bosh.io.
  2. When you're using BOSH, software is installed on stemcells by using BOSH releases, which are versioned, available online and actively maintained.
  3. A BOSH deployment defines a desired state. So if a VM disappears, BOSH will just re-create it, re-install the software and attach the persistent disk. Also, when you update the deployment manifest to use a newer stemcell version, BOSH will just swap out the current OS disk for the new one in a few seconds and everything will still work afterwards (there's a manifest snippet further down that shows what this looks like).
  4. All these parts can be pushed through a Concourse pipeline! The pipeline will even trigger automatically when a new stemcell version, release version or deployment manifest version is available. Below is a screenshot of a very simple pipeline I built. This pipeline keeps both the software and the OS of my Redis server up-to-date without me ever touching anything.

You can find the source files for this pipeline here. In real life you'd probably want to add a few steps to this pipeline: first deploy to a test environment, then run some automated tests, and only then push into production.
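To make point 3 from the list above concrete: rolling the OS of a whole deployment is a one-line change in the manifest. The snippet below is illustrative and the version number is made up:

    stemcells:
    - alias: default
      os: ubuntu-trusty
      version: "3421.11"   # bump this version and redeploy to swap out the OS disk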

In summary: if you're using BOSH, not only do you get all the goodness of versioning and desired state configuration, it also enables you to employ Continuous Deployment for all your servers and software. You can even test new versions automatically, so you don't have to spend all your time just keeping your platform up-to-date.

What is Concourse CI?

This is the third blog in my "What is" series about different products that are part of the Cloud Foundry ecosystem. I discussed Cloud Foundry and BOSH earlier, and now it's time for the next one: what is Concourse CI?

So what is it?

The GitHub tagline for the Concourse project is "Continuous thing doer", which is quite accurate. Some would call it a Continuous Integration tool. It serves the same purpose as the well-known tool Jenkins, but it works quite differently. You can find a comparison between Concourse and other CI tools here, so I won't go into details right now.

What is interesting to know, though, is that Concourse was born at Pivotal and has been the standard CI tool for Cloud Foundry and related projects for a while now. The product was born out of necessity: other tools just couldn't deliver what the CF development teams needed. And what may be even more important: other tools don't follow the design principles that all Pivotal and CF software follows. One of the most important ones being: "no snowflakes".

Snowflake?

As you may know, each snowflake is different from every other snowflake. It's unique. And that's fine when we're talking about real snowflakes. It's not so fine when it concerns servers, especially if you're running hundreds of them. If every server is special, you have to run a backup for each one of them regularly and you need instructions on how to configure each server when it needs to be rebuilt or recovered after a disaster. Troubleshooting becomes difficult because you don't know how a server is supposed to be configured; after all, it's different from all the other servers, so you have no reference.

In order to avoid snowflakes, CF, BOSH and Concourse use text files to store the configuration for apps, servers and pipelines. If a server or app fails you can just blow it away and reload it from the config file. Done.

If you are using Jenkins for your CI you probably did a lot of specific configuration on the Jenkins server. If you lost that server you would need to spend a lot of time re-configuring it or restoring it from a backup. It's different for Concourse though: in Concourse everything is stored in YAML files. Concourse server gone? Build a new one from scratch and reload your pipelines from the YAML files. You already know that works fine; after all, that's how the config got there in the first place.

Concourse concepts

Pipelines are first-class citizens in Concourse CI. A CI pipeline is basically all the steps that need to be taken to get application code from the code repository all the way to production servers, or at least to a production release. Steps could be: download the code, build the code, run unit tests, run integration tests, deploy to Cloud Foundry.

Concourse pipelines consist of resources and tasks; jobs are used to compose resources and tasks. The pipeline is described in a YAML file. Tasks can be described in the same YAML file but are often kept in external files. Since all of this is stored in the same repo as the application code, versioning tasks and pipelines becomes really easy. For an example, take a look at the ci folder in my demo app here.
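To give you a feel for the format, here is a heavily trimmed pipeline in the same spirit as that demo app. The repository URL and file paths are placeholders:

    resources:
    - name: app-source
      type: git
      source:
        uri: https://github.com/example/demo-app.git   # placeholder repo
        branch: master

    jobs:
    - name: unit-tests
      plan:
      - get: app-source
        trigger: true                          # run the job on every new commit
      - task: run-unit-tests
        file: app-source/ci/tasks/unit.yml     # task definition lives in the repo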

Below is a screenshot of what the full demo app pipeline looks like in the Concourse GUI:

[Screenshot: the demo app pipeline in the Concourse GUI]

The online documentation for Concourse CI is excellent so I’ll be lazy and give you the link to the description of the concepts here in case you want to know more :).

Try it yourself

Before you run off and try it yourself, let me tell you how to interact with Concourse. I already showed you the GUI, but know that the GUI is only intended to give a visual representation of your pipelines. It is great to show on a big monitor in your dev team's office.

Creating pipelines and some other configuration tasks are done through the fly CLI. Which is nice: I hate taking my hands off the keyboard :).
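A typical fly session looks something like this; the target alias, URL and pipeline name are placeholders:

    fly -t demo login -c http://192.168.100.4:8080     # save the target under the alias "demo"
    fly -t demo set-pipeline -p my-pipeline -c ci/pipeline.yml
    fly -t demo unpause-pipeline -p my-pipeline
    fly -t demo pipelines                              # list the pipelines on the target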

If you want to try Concourse out for yourself, running the dockerized version is probably the fastest way to get going. If you read my blog post about BOSH and gave that a go yourself, you might want to try deploying Concourse using BOSH. To help you get started I shared my BOSH manifest below. I couldn't get the HTTPS part working, so I left that out for now.
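This is a trimmed-down sketch rather than a production config: it assumes a director with a cloud-config in place and the concourse and garden-runc releases uploaded from bosh.io, and I replaced all credentials, IPs and names with placeholders.

    ---
    name: concourse

    releases:
    - name: concourse
      version: latest
    - name: garden-runc
      version: latest

    stemcells:
    - alias: trusty
      os: ubuntu-trusty
      version: latest

    instance_groups:
    - name: web
      instances: 1
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - release: concourse
        name: atc
        properties:
          external_url: http://10.0.0.10:8080   # placeholder URL
          basic_auth_username: admin            # placeholder credentials
          basic_auth_password: changeme
          postgresql_database: &atc_db atc
      - release: concourse
        name: tsa
        properties: {}
    - name: db
      instances: 1
      vm_type: default
      stemcell: trusty
      persistent_disk_type: default
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - release: concourse
        name: postgresql
        properties:
          databases:
          - name: *atc_db
            role: atc
            password: changeme                  # placeholder
    - name: worker
      instances: 1
      vm_type: default
      stemcell: trusty
      azs: [z1]
      networks: [{name: default}]
      jobs:
      - release: concourse
        name: groundcrew
        properties: {}
      - release: concourse
        name: baggageclaim
        properties: {}
      - release: garden-runc
        name: garden
        properties:
          garden:
            listen_network: tcp
            listen_address: 0.0.0.0:7777

    update:
      canaries: 1
      max_in_flight: 1
      canary_watch_time: 1000-60000
      update_watch_time: 1000-60000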


The Why, What and How of Automation

Today my first ever whitepaper was published. It's titled "The Why, What and How of Automation". Here is the teaser:

The current digitization wave puts an ever increasing load on enterprise IT departments. At the same time the business is expecting shorter delivery times for IT services just to stay ahead of the competition. To keep delivering the right services on time enterprise IT needs a high degree of automation.

The whitepaper explains why automation is so important, what you need to automate and how this can be done. Those who attended my NLVMUG session might notice that this whitepaper has the same title as my presentation. That’s obviously not a coincidence. If you missed the session make sure to download and read the whitepaper here: http://itq.nl/the-why-what-and-how-of-automation/

I’ll be posting a few more blogs on some of the topics in the whitepaper as well so stay tuned :).

Invite: ITQ Technical Update Session

 
INVITATION:
ITQ Technical Update Session
Wednesday, June 25, 2014
Einstein
A note for all non-Dutch-speaking readers: all sessions at this event will be in Dutch. Dennis and I will be presenting two sessions: "Shoebox sized datacenter" and "Datacenter Robotics".

The virtualization specialists at ITQ closely follow the developments and trends in the market, among other things by attending events such as VMworld, VMware Partner Exchange, VMUG, Storage Field Days, DevOpsDays, etc.

An important trend is the "Software Defined DataCenter". But what exactly is the Software Defined DataCenter, what are its benefits and possibilities, and can it be done in my current datacenter?

During this evening we will look at the various aspects of the Software Defined DataCenter and zoom in on Software Defined Performance, Software Defined Storage and the automation possibilities that Software Defined offers us.

The common thread among our consultants is a passion for technology, and we would love to share that passion with you during this evening.

Session 1:
Software Defined Performance with PernixData FVP
Discover how much performance current technology can deliver in a datacenter the size of a shoebox! We will show this in a presentation supported by infographic-style visualizations, followed by a live demo on the mobile datacenter that ITQ built itself.

The presentation will cover what the shift from traditional disks to flash means for our datacenters and how storage capacity can be decoupled from performance. We will discuss the capabilities of PernixData FVP and give insight into newly announced features.

Session 2:
Software Defined Storage
The ITQ vUnit, split into three teams, ran a series of tests looking objectively at performance, features, stability, robustness, ease of use, et cetera. After a short introduction to this disruptive market development and the products we tested, ITQ will reveal all the test results in an interactive panel discussion!

The products tested:
·         Maxta Storage Platform
·         EMC ScaleIO
·         VMware Virtual SAN

Session 3:
VMware vCloud Automation Center / vCenter Orchestrator
This session explains why a datacenter needs a robot to run the Software Defined Datacenter and where to find that robot. We will show that with software every vSphere user already has at their disposal, almost anything can be automated. We will also cover how this robot can automate various processes using vCloud Automation Center.

Agenda
18.00 – 18.30:     Arrival with soup and sandwich buffet
18.30 – 19.30:     Session 1: Software Defined Performance with PernixData FVP
19.30 – 20.00:     Session 2: Software Defined Storage (panel)
20.00 – 20.15:     Break
20.15 – 21.00:     Session 3: VMware vCloud Automation Center / VMware vCenter Orchestrator
21.00 – ??.??:     Closing + networking drinks

Location:       Hotel Vianen
                        Prins Bernhardstraat 75
                        4132 XE Vianen

Space is limited, so make sure to register for this event in time. Register here! We hope to welcome you.