Tag Archives: docker

webEye – The webhook receiver

When building out the demo environment for our NLVMUG UserCon presentation I came across a problem: I wanted to notify my private cloud whenever a new Docker image was built on Docker Hub. This proved impossible with the existing VMware software, so I created my own solution. And here it is: webEye, the webhook receiver. It simply forwards all received webhooks to an AMQP bus after checking that each one is a valid webhook message. You can pick up the messages with your favourite orchestration tool and act on them.



Every hook needs an eye to hook into. That’s why my little app is called webEye 🙂


webEye is written in JavaScript and runs on Node.js. It is designed to run in a Docker container. However, it has already evolved from something that was originally intended to just receive Docker Hub webhooks. Currently it also has support for my “Magic Button” and even for vRealize Operations. Other webhook senders might follow.

Getting started

As I said, webEye is developed to run in a Docker container, so this getting started guide only covers how to start the app in a Docker environment.

  • All received hooks are forwarded to an AMQP bus, so let’s start an AMQP server first: docker run --name rabbit -p 5672:5672 -p 15672:15672 dockerfile/rabbitmq
  • Now start webEye: docker run -p 80:80 -p 443:443 -e "DHKEY=12345" -e "MBKEY=12345" --name webEye --link rabbit:rabbit -t vchrisr/webeye
  • The DHKEY in the line above sets the API key that you need to send with each request, which adds a bit of security. Make sure to put in a random string instead of “12345”. Tip: random.org
  • Now make sure port 80 on your webEye server is mapped to a public IP address
  • Open the webEye page in a browser. This first visit triggers Phusion Passenger in the container to start the Node.js app, which in turn creates a persistent exchange on the RabbitMQ server.
  • Create a webhook on your Docker Hub repository pointing to http://{your public ip}:{public port}/dockerhub?apikey=12345
  • Connect your consumer to the RabbitMQ server
  • Create a new queue to receive your messages
  • Create a binding which routes messages with routing key webeye.docker.hub to your queue
  • Create a subscription for the queue you created
  • If you’re using vRO you can now create a policy which triggers a workflow when a message appears in the subscription.
  • Create a workflow that does whatever you want when a Docker Hub hook is received.
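
The binding step above relies on RabbitMQ’s topic-exchange semantics. As a rough illustration (this is not webEye’s code, and the wildcard patterns are just examples), here is how a topic exchange decides whether a binding pattern matches a routing key like webeye.docker.hub:

```python
# Illustrative sketch of AMQP topic matching: '*' matches exactly one
# dot-separated word, '#' matches zero or more words. Not webEye code.

def topic_match(pattern: str, key: str) -> bool:
    p, k = pattern.split("."), key.split(".")

    def match(i: int, j: int) -> bool:
        if i == len(p):                      # pattern exhausted:
            return j == len(k)               # match only if key exhausted too
        if p[i] == "#":                      # '#' may consume 0..n words
            return any(match(i + 1, j2) for j2 in range(j, len(k) + 1))
        if j == len(k):                      # key exhausted, pattern is not
            return False
        return p[i] in ("*", k[j]) and match(i + 1, j + 1)

    return match(0, 0)

print(topic_match("webeye.docker.hub", "webeye.docker.hub"))
print(topic_match("webeye.docker.*", "webeye.docker.hub"))
print(topic_match("webeye.#", "webeye.docker.hub"))
print(topic_match("webeye.*", "webeye.docker.hub"))
```

So a binding with the exact key webeye.docker.hub is the simplest option, but a pattern such as webeye.# would also catch any future hook types webEye publishes.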

Testing webEye

If you were able to make webEye available on the public internet and you’ve configured a webhook on your Docker repo, you can simply click “test” on the webhook configuration page.

To test offline I usually use the Firefox RESTClient plugin:

  • Select “POST” as the method.
  • Enter this url: http://<ip of webEye machine>:<port>/dockerhub?apikey=<apikey>
  • Add this header: Content-Type: application/json
  • For the body you need some actual content. webEye checks for the presence of some specific fields in the JSON to make sure it’s a Docker Hub webhook. I usually use the JSON from the Docker Hub documentation.
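
For reference, a minimal sketch of the kind of body that passes such a check might look like this. The top-level field names follow Docker Hub’s documented push payload; the values are made up, and webEye’s exact checks live in its source:

```python
import json

# Minimal sample resembling a Docker Hub push webhook payload.
# Field names follow Docker Hub's documentation; values are placeholders.
sample = {
    "callback_url": "https://registry.hub.docker.com/u/yourname/yourrepo/hook/abc/",
    "push_data": {
        "pushed_at": 1417566161,
        "pusher": "yourname",
        "tag": "latest",
    },
    "repository": {
        "repo_name": "yourname/yourrepo",
        "name": "yourrepo",
        "namespace": "yourname",
    },
}

def looks_like_docker_hub_hook(payload: dict) -> bool:
    """Check for the top-level fields a Docker Hub push webhook carries."""
    return all(f in payload for f in ("callback_url", "push_data", "repository"))

body = json.dumps(sample)          # what you would paste into RESTClient
print(looks_like_docker_hub_hook(json.loads(body)))
```

If a required field is missing, a receiver doing this kind of check would reject the request rather than publish it to the bus.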


CoreOS now fully supported on VMware products

Last week CoreOS released an OS image which includes the open-vm-tools. Of course it was possible to run CoreOS on VMware before, but something was missing. With the addition of the open-vm-tools, CoreOS is now fully supported on all VMware products, including vSphere 6 and vCloud Air.


I happened to be working on my demo for the Dutch VMUG UserCon, which involves CoreOS as well. So I decided to give it a go as soon as the image was released, and it turns out it works perfectly. I no longer have to build sleeps into my workflows; I can just wait until the VMware Tools are online and then continue the workflow. This makes deploying CoreOS much more efficient and reliable. Also, the graceful shutdown finally works, which prevents the OS from getting corrupted when I have to force a reboot from a workflow.

I’ll write in more detail about my automated CoreOS deployment in the coming weeks. If you happen to live in the Netherlands, come and see our demo and presentation this Thursday (19 March 2015) at the Dutch VMUG UserCon.

Want to get started with CoreOS yourself? Check out this blog post for instructions on how to do this on VMware Fusion. Want to run it on vSphere? Here is what I do to download the image on a Linux machine and get it to vSphere.
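
The download step boils down to fetching the VMware OVA from CoreOS’s release server. A small sketch of that step (the stable-channel URL below is the one CoreOS publishes; verify it against the current CoreOS documentation before relying on it):

```python
import urllib.request

# Stable-channel VMware OVA published by CoreOS; double-check this URL
# against the CoreOS docs before use.
OVA_URL = ("https://stable.release.core-os.net/amd64-usr/current/"
           "coreos_production_vmware_ova.ova")

def download_ova(dest: str = "coreos_production_vmware_ova.ova") -> str:
    """Fetch the OVA to the local machine (the equivalent of a wget)."""
    urllib.request.urlretrieve(OVA_URL, dest)
    return dest

if __name__ == "__main__":
    # Print rather than download, so the script is safe to run anywhere.
    print(OVA_URL)
```

On a plain Linux box a simple wget of that URL does the same job.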

Then import the OVF to vSphere using the vSphere (Web) Client. To start using the image you’ll also need a config drive ISO file; how to create one is also explained in this article.

My personal VMworld news highlights

The 11th edition of the annual VMworld conference is taking place this week in San Francisco. I am not there this year, but I have been following the news coming from the event closely. There are a lot of blogs out there about all the obvious highlights of the event, like the new NSX version, vCloud Air, EVO, VMware Workspace and the rebranding of all management tools to vRealize. However, there are a few announcements which really caught my attention. So here are my personal VMworld news highlights.


Docker

Containers are a way to run multiple applications isolated from each other on the same OS. So it’s like virtualization inside the OS instead of underneath the OS. This technology has been in use by Google for years; it is their primary way of deploying applications. So it’s not a new technology, but it wasn’t used much outside the big web companies. That is changing rapidly with the introduction of Docker. Docker makes application containers portable, very much like x86 virtualization made systems portable.

One could argue that containers render virtual machines obsolete. But in many cases combining VMs and containers will be the best solution. As Kit Colbert put it in this session: VMs are used for security and multitenancy and Containers are used for reproducibility.

VMware recognized the value of containers, and at VMworld they announced that they will be working with Docker to integrate it into their product lines. As you can read in this article, VMware will be using Docker in the future to deliver their own software. They will also be collaborating with Docker on open source projects.

I think this is a great development. Application deployment can be difficult, and VMware currently has no technology to make it any easier. Apart from Application Director maybe, but that’s just a glorified script launcher, hardly a new technology. Docker will make the lives of those responsible for deploying applications a lot easier.

Project Fargo

Project Fargo is also called VM fork, and that exactly describes what it is: a technology which makes it possible to fork a running VM. In other words, spin up a copy of a running VM without having to boot anything. Combine this with containerized applications and you’re able to scale out an application in a second. You can read a bit more about it here.

Open compute project

VMware announced that they are joining the Open Compute Project. I have written about OCP before and I am still following the project closely. I really like the hardware designs because of their efficiency. VMware now supports vSphere 5 on both the AMD and Intel compute nodes. This is good news for the service providers out there running on vSphere 5. The OCP hardware is a lot cheaper and more energy efficient than comparable servers, which means they’ll be able to offer better value for money.